Results 1 - 2 of 2
1.
medrxiv; 2022.
Preprint in English | medRxiv | ID: ppzbmed-10.1101.2022.03.16.22272469

Abstract

The COVID-19 pandemic has highlighted and accelerated the use of algorithmic decision support for public health. The latter's potential impact and risk of bias and harm urgently call for scrutiny and evaluation standards. One example is the early detection of local infectious disease outbreaks. Whereas many statistical models have been proposed and disparate systems are routinely used, each tailored to specific data streams and uses, no systematic strategy exists for evaluating their performance in a real-world context. One difficulty in evaluating outbreak prediction, detection, or annotation lies in the different scales of the approaches: how to compare slow but fine-grained genetic clustering of individual samples with rapid but coarse anomaly detection based on aggregated syndromic reports? Or alarms generated for different, overlapping geographical regions or demographics? We propose a general, data-driven, user-centric framework for evaluating heterogeneous outbreak algorithms. Discrete outbreak labels and case counts are defined on a custom data grid, and the associated target probabilities are then computed and compared with algorithm output. The latter consists of discrete "signals" generated for a number of grid cells (the finest available in the benchmarking data set) with different weights and prior outbreak information, from which estimated outbreak label probabilities are derived. Prediction performance is quantified through a series of metrics, including the confusion matrix, regression scores, and mutual information. The dimensions of the data grid can be weighted by the user to reflect epidemiological criteria.
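
As a rough illustration of the evaluation idea sketched in this abstract, the following Python snippet compares ground-truth outbreak labels on a toy data grid with thresholded algorithm signals and reports a confusion matrix, a regression score (MSE), and mutual information. The simulated grid, the 0.5 decision threshold, and all variable names are assumptions for illustration only, not the authors' benchmarking framework.

import numpy as np
from sklearn.metrics import confusion_matrix, mean_squared_error, mutual_info_score

rng = np.random.default_rng(0)

# Toy data grid: regions x weeks, with ground-truth outbreak labels (0/1)
n_regions, n_weeks = 10, 52
truth = rng.binomial(1, 0.1, size=(n_regions, n_weeks))

# Simulated algorithm output: per-cell signal weights in [0, 1]
# (in a real benchmark these would come from the outbreak detection system)
signals = np.clip(truth * 0.8 + rng.normal(0.0, 0.2, truth.shape), 0.0, 1.0)

# Estimated outbreak label probabilities and discrete predictions
# (here the signals are used directly; the framework would also fold in
# cell weights and prior outbreak information)
probs = signals
pred = (probs >= 0.5).astype(int)

# Metrics: confusion matrix, a regression score, and mutual information
print("confusion matrix:\n", confusion_matrix(truth.ravel(), pred.ravel()))
print("MSE of probabilities:", mean_squared_error(truth.ravel(), probs.ravel()))
print("mutual information:", mutual_info_score(truth.ravel(), pred.ravel()))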


Subject(s)
COVID-19, Communicable Diseases
2.
medrxiv; 2020.
Preprint in English | medRxiv | ID: ppzbmed-10.1101.2020.09.02.20186502

Abstract

As several countries gradually release social distancing measures, rapid detection of new localised COVID-19 hotspots and subsequent intervention will be key to avoiding large-scale resurgence of transmission. We introduce ASMODEE (Automatic Selection of Models and Outlier Detection for Epidemics), a new tool for detecting sudden changes in COVID-19 incidence. Our approach relies on automatically selecting the best fitting or predicting model from a range of user-defined time series models, excluding the most recent data points, to characterise the main trend in incidence. We then derive prediction intervals and classify data points outside these intervals as outliers, which provides an objective criterion for identifying departures from previous trends. We also provide a method for selecting the optimal breakpoints, used to define how many recent data points are excluded from the trend-fitting procedure. Analysis of simulated COVID-19 outbreaks suggests that ASMODEE compares favourably with a state-of-the-art outbreak-detection algorithm while being simpler and more flexible. We illustrate our method using publicly available NHS Pathways data reporting potential COVID-19 cases in England at a fine spatial scale, for which we provide a template automated analysis pipeline. ASMODEE is implemented in the free R package trendbreaker.
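
The sketch below illustrates, in Python, the core trend-plus-outlier idea described above. The actual ASMODEE tool is the R package trendbreaker and selects the best model from a user-defined set; this toy version instead fits a single linear trend and uses a normal-approximation prediction interval. The function name, the n_recent window, and the z threshold are illustrative assumptions.

import numpy as np

def flag_recent_outliers(counts, n_recent=7, z=1.96):
    """Fit a linear trend to all but the last `n_recent` points and flag
    recent points that fall outside an approximate prediction interval."""
    counts = np.asarray(counts, dtype=float)
    t = np.arange(len(counts))
    t_fit, y_fit = t[:-n_recent], counts[:-n_recent]

    # Linear trend fitted on the training window only
    slope, intercept = np.polyfit(t_fit, y_fit, 1)
    resid_sd = np.std(y_fit - (slope * t_fit + intercept), ddof=2)

    # Approximate prediction interval for the most recent points
    expected = slope * t[-n_recent:] + intercept
    recent = counts[-n_recent:]
    return (recent < expected - z * resid_sd) | (recent > expected + z * resid_sd)

# Example: a flat trend followed by a sudden jump in the last three days
incidence = [20, 22, 19, 21, 20, 23, 22, 21, 20, 22, 21, 23, 40, 45, 50]
print(flag_recent_outliers(incidence, n_recent=3))  # [ True  True  True]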


Subject(s)
COVID-19